
    A Systems Factorial Technology Dataset using Visual and Tactile Cues to Guide Balance

    This dataset contains response times for 19 participants from a Systems Factorial Technology paradigm using visual and vibratory cues, as described in The Balance Between Vision and Touch [1]. These cues could indicate one of four directions, and participants responded by shifting their weight in that direction; responses were detected using a Wii Balance Board. Each participant completed 720 trials: 1/3 with only a haptic cue, 1/3 with only a visual cue, and 1/3 with both. Cues were equally divided into high- and low-salience versions.
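The factorial structure of the trial design described above can be sketched as follows. The condition labels, direction names, and the assumption that trials are balanced across all cells are illustrative, not the dataset's actual coding:

```python
from itertools import product

modalities = ["haptic_only", "visual_only", "both"]    # each 1/3 of trials
directions = ["forward", "backward", "left", "right"]  # cued lean direction
saliences  = ["low", "high"]                           # equally divided

# Cross the factors to enumerate every condition cell.
conditions = list(product(modalities, directions, saliences))
trials_per_condition = 720 // len(conditions)  # 720 trials per participant

print(len(conditions), trials_per_condition)   # prints: 24 30
```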

    The Many Faces of Garner Interference

    Thesis (Ph.D.) - Indiana University, Psychological & Brain Sciences, 2014. A series of speeded classification tasks proposed by Garner (1974) has become a well-entrenched method for identifying interactions between perceptual dimensions. The theory proposes that integral dimensions should produce a redundancy gain when a second dimension covaries perfectly with the attended dimension, and interference if the second dimension varies irrelevantly. This work questions the interpretation of such results as indicating interactive dimensions, reviewing independent models which naturally exhibit such effects. Furthermore, there are several methodological confounds which make the cause of Garner interference non-identifiable in the standard experimental context, the most serious of which is the conflation of changes in the number of stimuli with changes in the number of irrelevant dimensions. Proposed here is a novel three-dimensional extension of the Garner paradigm capable of disambiguating these experimental factors; it includes several conditions designed to help distinguish between competing models of the related phenomena. This new paradigm was implemented with two stimulus sets, both composed of known integral dimensions, but from opposite sides of the complexity spectrum: color patches differing in their saturation, brightness, and hue; and faces differing in weight, age, and gender. Results show typical Garner interference effects for both stimulus sets, although the redundancy gains were rather modest. When a three-dimensional analog of the Garner filtering test is created by allowing a second irrelevant dimension to vary, however, the expected interference effects do not appear. Counter-intuitively, this additional variation often leads to an improvement in performance, an effect which cannot be predicted by the extant models. This effect is shown to be driven primarily by the extra dimension of variation rather than the additional stimuli. The implications of these (and other) findings are considered with regard to the utility of the Garner paradigm and the models that have attempted to describe it.
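The classic Garner conditions the paradigm builds on can be sketched by enumerating stimulus sets over two binary dimensions. The dimension and level labels below are illustrative placeholders, not the study's stimuli:

```python
from itertools import product

# Two binary dimensions; the first is attended, the second irrelevant.
attended_levels   = ["low", "high"]   # e.g. saturation
irrelevant_levels = ["low", "high"]   # e.g. brightness

# Baseline: the irrelevant dimension is held fixed.
baseline = [(a, "low") for a in attended_levels]

# Correlated: the irrelevant dimension covaries perfectly with the attended one.
correlated = list(zip(attended_levels, irrelevant_levels))

# Filtering: the irrelevant dimension varies freely across trials.
filtering = list(product(attended_levels, irrelevant_levels))

# Garner interference = slower responses in filtering than baseline;
# redundancy gain = faster responses in correlated than baseline.
print(len(baseline), len(correlated), len(filtering))  # prints: 2 2 4
```

Note that filtering adds both an extra varying dimension and extra stimuli (4 vs 2), which is exactly the confound the three-dimensional extension is designed to tease apart.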

    Measurement Effects in Decision-Making

    When participants are shown a series of stimuli, their responses differ depending on whether they respond after each stimulus or only at the end of the series, in what we call a measurement effect. These effects have received scant attention compared with the more well-known order effects and pose a unique challenge to theories of decision-making. In a series of two preregistered experiments, we consistently find measurement effects such that responding to a stimulus reduces its impact on later stimuli. While previous research has found such effects in noncumulative tasks, where participants are instructed to respond only to the most recent stimulus, this may be the first demonstration of these effects when participants are asked to combine information across either two or four stimuli. We present modeling results showing that although several extant classical and quantum models fail to predict the direction of these effects, new versions can be created that do so. Ways in which these effects can be described using either quantum or classical models are discussed, as well as potential connections with other well-known phenomena such as the dilution effect.

    Trunk Velocity-Dependent Light Touch Reduces Postural Sway during Standing

    Light Touch (LT) has been shown to reduce postural sway in a wide range of populations. While LT is believed to provide additional sensory information for balance modulation, the nature of this information and its specific effect on balance remain unclear. To better understand LT and to potentially harness its advantages for a practical balance aid, we investigated the effect of LT as provided by a haptic robot. Postural sway during standing balance was reduced when the LT force (~1 N) applied to the upper back area was dependent on trunk velocity. Additional information on trunk position, provided through orthogonal vibrations, further reduced the position metric of sway but did not further improve the velocity metric of balance. Our results suggest that the limited and noisy information on trunk velocity encoded in LT is sufficient to influence standing balance. © 2019 Saini et al.
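A trunk-velocity-dependent light-touch force of the kind described above can be sketched as a simple damping law capped at the ~1 N light-touch level. The gain value is an arbitrary illustration; the paper's actual controller parameters are not reproduced here:

```python
def light_touch_force(trunk_velocity_m_s, gain=2.0, max_force_n=1.0):
    """Illustrative velocity-dependent light-touch force.

    The force opposes trunk motion (viscous damping) and is clipped so it
    never exceeds the light-touch range (~1 N). Gain is a made-up value.
    """
    f = -gain * trunk_velocity_m_s
    return max(-max_force_n, min(max_force_n, f))

# A small forward trunk velocity yields a small restoring force;
# a large velocity saturates at the light-touch cap.
print(light_touch_force(0.1))  # prints: -0.2
print(light_touch_force(2.0))  # prints: -1.0
```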

    Optimizing Exterior Lighting Illuminance and Spectrum for Human, Environmental, and Economic Factors.

    With the recent widespread adoption of LED lighting in outdoor areas, numerous concerns have been raised about the potential for harmful effects on humans, animals, plants, and the night sky. These stem from the high blue-light content of some LED bulbs and an incentive to increase lighting levels created by higher efficiency and lower costs. While new lighting installations are often described as environmentally friendly due to their energy efficiency, factors such as light pollution are often neglected or given too little weight. This research focuses on optimizing the design of exterior lighting for human, environmental, and economic factors using a multi-criteria decision analysis. Based on data from the literature and survey research, illuminance and spectrum alternatives were scored relative to each other using the analytic hierarchy process and multi-attribute utility theory. The findings of this study support the use of artificial illumination at levels similar to a full moon (0.01 fc) and a warm white spectrum (2700K or 2200K), with amber LED becoming a better choice if its energy efficiency and cost-effectiveness improve in the future. This methodology can serve as a framework for lighting design optimization in other settings.
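The analytic hierarchy process step mentioned above derives criterion weights from the principal eigenvector of a pairwise comparison matrix. The comparison values below are hypothetical, not the study's survey data:

```python
import numpy as np

# Pairwise comparisons among three criteria (hypothetical judgments):
# human factors vs environmental vs economic, on Saaty's 1-9 scale.
A = np.array([
    [1.0, 3.0, 5.0],
    [1/3, 1.0, 3.0],
    [1/5, 1/3, 1.0],
])

# AHP priority weights: the normalized principal (largest-eigenvalue)
# eigenvector of the comparison matrix.
eigvals, eigvecs = np.linalg.eig(A)
principal = np.argmax(eigvals.real)
w = eigvecs[:, principal].real
weights = w / w.sum()

print(weights.round(3))  # human factors weighted most heavily here
```

Each alternative's illuminance/spectrum scores would then be combined with these weights via multi-attribute utility theory to rank the design options.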

    Guiding a Human Follower with Interaction Forces: Implications on Physical Human-Robot Interaction

    This work challenges the common assumption in physical human-robot interaction (pHRI) that the movement intention of a human user can simply be modeled with dynamic equations relating forces to movements, regardless of the user. Studies in physical human-human interaction (pHHI) suggest that interaction forces carry sophisticated information that reveals motor skills and roles in the partnership and even promotes adaptation and motor learning. In this view, the simple force-displacement equations often used in pHRI studies may not be sufficient. To test this, this work measured and analyzed the interaction forces (F) between two humans as the leader guided the blindfolded follower along a randomly chosen path. The actual trajectory of the follower was transformed into the velocity commands (V) that would allow a hypothetical robot follower to track the same trajectory. Possible analytical relationships between F and V were then obtained using neural network training. Results suggest that while F helps predict V, the relationship is not straightforward; that seemingly irrelevant components of F may be important; that force-velocity relationships are unique to each human follower; and that human neural control of movement may affect the prediction of movement intent. It is suggested that user-specific, stereotype-free controllers may more accurately decode human intent in pHRI.
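The kind of simple force-to-velocity map the study argues against can be sketched as a linear least-squares fit. The data here are synthetic and deliberately linear, so the fit is near-perfect; the paper's point is that real human F and V do not admit such a straightforward mapping:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for the measured trials: 500 samples of a 3-axis
# interaction force F and a 2-component velocity command V.
F = rng.normal(size=(500, 3))
true_W = np.array([[0.8, 0.0],
                   [0.1, 0.5],
                   [0.0, -0.3]])
V = F @ true_W + 0.05 * rng.normal(size=(500, 2))  # small sensor-like noise

# Baseline linear model V ≈ F W, fit by least squares.
W, _, _, _ = np.linalg.lstsq(F, V, rcond=None)
V_hat = F @ W
r2 = 1 - ((V - V_hat) ** 2).sum() / ((V - V.mean(axis=0)) ** 2).sum()

print(round(float(r2), 3))  # near 1.0 on this artificial linear data
```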

    Catching Element Formation In The Act

    Gamma-ray astronomy explores the most energetic photons in nature to address some of the most pressing puzzles in contemporary astrophysics. It encompasses a wide range of objects and phenomena: stars, supernovae, novae, neutron stars, stellar-mass black holes, nucleosynthesis, the interstellar medium, cosmic rays and relativistic-particle acceleration, and the evolution of galaxies. MeV gamma-rays provide a unique probe of nuclear processes in astronomy, directly measuring radioactive decay, nuclear de-excitation, and positron annihilation. The substantial information carried by gamma-ray photons allows us to see deeper into these objects; the bulk of the power is often emitted at gamma-ray energies; and radioactivity provides a natural physical clock that adds unique information. New science will be driven by time-domain population studies at gamma-ray energies. This science is enabled by next-generation gamma-ray instruments with one to two orders of magnitude better sensitivity, larger sky coverage, and faster cadence than all previous gamma-ray instruments. This transformative capability permits: (a) accurate identification of the gamma-ray-emitting objects and correlations with observations taken at other wavelengths and with other messengers; (b) construction of new gamma-ray maps of the Milky Way and other nearby galaxies in which extended regions are distinguished from point sources; and (c) considerable serendipitous science from scarce events -- nearby neutron star mergers, for example. Advances in technology push the performance of new gamma-ray instruments to address a wide set of astrophysical questions. Comment: 14 pages including 3 figures.

    Changing the Paradigm for Management of Pediatric Primary Spontaneous Pneumothorax: A Simple Aspiration Test Predicts Need for Operation

    Purpose: Chest tube (CT) management for pediatric primary spontaneous pneumothorax (PSP) is associated with long hospital stays and high recurrence rates. To streamline management, we explored simple aspiration as a test to predict the need for surgery. Methods: A multi-institution, prospective pilot study of patients at first presentation with PSP at 9 children's hospitals was performed. Aspiration was performed through a pigtail catheter, followed by 6 h of observation with the CT clamped. If pneumothorax recurred during observation, the aspiration test failed and subsequent management was per surgeon discretion. Results: Thirty-three patients were managed with simple aspiration. Aspiration was successful in 16 of 33 (48%), while 17 (52%) failed the aspiration test and required hospitalization. Twelve who failed aspiration underwent CT management, of whom 10 (83%) failed CT management owing to either a persistent air leak requiring VATS or subsequent PSP recurrence. The recurrence rate was significantly greater in the group that failed aspiration than in the group that passed [10/12 (83%) vs 7/16 (44%), P = 0.028]. Conclusion: A simple aspiration test at presentation with PSP predicts chest tube failure with an 83% positive predictive value. We recommend changing the PSP management algorithm to include an initial simple aspiration test and, if that fails, proceeding directly to VATS.
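The positive-predictive-value figure reported above follows directly from the counts in the Results, as a quick check shows:

```python
# Counts taken from the abstract's Results section.
failed_aspiration = 17   # of 33 patients failed the aspiration test
ct_managed = 12          # failed-aspiration patients managed with a chest tube
ct_failed = 10           # of those, failed CT management (air leak or recurrence)

# PPV of a failed aspiration test for predicting chest tube failure.
ppv = ct_failed / ct_managed
print(f"{ppv:.0%}")      # prints: 83%
```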

    Risk Portfolio Optimization Using the Markowitz MVO Model, in Relation to Human Limitations in Predicting the Future, from the Perspective of the Al-Qur'an

    Risk portfolio management in modern finance has become increasingly technical, requiring the use of sophisticated mathematical tools in both research and practice. Since companies cannot insure themselves completely against risk, owing to human inability to predict the future precisely, as written in Al-Qur'an surah Luqman verse 34, they have to manage it to yield an optimal portfolio. The objective here is to minimize the variance among all portfolios, or alternatively, to maximize the expected return among all portfolios that have at least a certain expected return. This study focuses on optimizing the risk portfolio with the Markowitz MVO (Mean-Variance Optimization) model. The theoretical frameworks used in the analysis are the arithmetic mean, geometric mean, variance, covariance, linear programming, and quadratic programming. Finding a minimum-variance portfolio produces a convex quadratic program: minimizing the objective function xᵀQx subject to the constraints μᵀx ≥ r and Ax = b. The outcome of this research is the optimal risk portfolio solution for several investments, computed using MATLAB R2007b software together with graphical analysis.
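The simplest instance of this quadratic program, the global minimum-variance portfolio under only the budget constraint 1ᵀx = 1, has a closed-form solution, sketched below in Python rather than the study's MATLAB. The covariance matrix is illustrative, not the study's data:

```python
import numpy as np

# Illustrative covariance matrix Q for three assets.
Q = np.array([
    [0.10, 0.02, 0.01],
    [0.02, 0.08, 0.03],
    [0.01, 0.03, 0.12],
])

# Minimize x'Qx subject to 1'x = 1. The Lagrangian conditions give the
# closed-form solution x = Q^{-1} 1 / (1' Q^{-1} 1).
ones = np.ones(3)
Q_inv = np.linalg.inv(Q)
x = Q_inv @ ones / (ones @ Q_inv @ ones)

variance = float(x @ Q @ x)
print(x.round(3), round(variance, 4))  # weights sum to 1
```

Adding the expected-return constraint μᵀx ≥ r (and any Ax = b conditions) turns this into the full MVO problem, which a quadratic-programming solver handles the same way.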